Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment that ablates intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region-of-interest (ROI) segmentation before and after LITT would enable automated lesion quantification for objective assessment of treatment efficacy. Deep learning techniques such as convolutional neural networks (CNNs) are the state-of-the-art solution for ROI segmentation, but they require large amounts of annotated data for training. However, collecting large datasets from an emerging treatment such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and diversity of the training dataset. Specifically, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better exploit external information and provide additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of the masks into the feature space. Finally, a segmentation network is trained on both raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method achieves realistic synthesis results and improves downstream segmentation performance on top of traditional data augmentation techniques.
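Purely as an illustration of the two-stage idea (a mask synthesis network followed by a mask-guided lesion synthesis network), a minimal PyTorch sketch is given below; the module shapes and layer choices are assumptions for demonstration, not the paper's PAVAE implementation:

```python
import torch
import torch.nn as nn

class MaskSynthesisNet(nn.Module):
    """Stage 1: decode a latent code into a lesion mask (toy stand-in)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.decode = nn.Sequential(nn.Linear(latent_dim, 32 * 32), nn.Sigmoid())
    def forward(self, z):
        return self.decode(z).view(-1, 1, 32, 32)

class MaskGuidedLesionNet(nn.Module):
    """Stage 2: synthesize a lesion image conditioned on the generated mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
    def forward(self, background, mask):
        # Concatenate the healthy background scan with the mask as conditioning input.
        return self.net(torch.cat([background, mask], dim=1))

z = torch.randn(4, 64)
background = torch.randn(4, 1, 32, 32)
mask = MaskSynthesisNet()(z)
lesion_image = MaskGuidedLesionNet()(background, mask)
print(lesion_image.shape)  # torch.Size([4, 1, 32, 32])
```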
Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes. While neural networks have achieved competitive performance, the resulting policies are often over-parameterized black boxes that are difficult to interpret and deploy efficiently. More recent symbolic RL frameworks have shown that high-level domain-specific programming logic can be designed to handle both policy learning and symbolic planning. However, these approaches rely on coded primitives with little feature learning, and when applied to high-dimensional visual scenes, they can suffer from scalability issues and perform poorly when images have complex object interactions. To address these challenges, we propose \textit{Differentiable Symbolic Expression Search} (DiffSES), a novel symbolic learning approach that discovers discrete symbolic policies using partially differentiable optimization. By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions, while also incorporating the strengths of neural networks for feature learning and optimization. Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods, with a reduced amount of symbolic prior knowledge.
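As a rough illustration of policy search over object-level abstractions rather than raw pixels, the toy sketch below evaluates a hand-written symbolic expression on object features; the state layout, expression form, and parameters are assumptions, not DiffSES itself:

```python
import numpy as np

def object_abstraction(frame_objects):
    """Toy object-level state: (agent_x, agent_y, nearest_target_dx, nearest_target_dy)."""
    agent = frame_objects["agent"]
    target = min(frame_objects["targets"],
                 key=lambda t: abs(t[0] - agent[0]) + abs(t[1] - agent[1]))
    return np.array([agent[0], agent[1], target[0] - agent[0], target[1] - agent[1]])

def symbolic_policy(state, theta):
    """A discovered expression might read: move right iff dx * theta0 + dy * theta1 > 0.
    The expression structure is discrete; theta stays continuous for fine-tuning."""
    _, _, dx, dy = state
    return 1 if dx * theta[0] + dy * theta[1] > 0 else 0

state = object_abstraction({"agent": (2, 3), "targets": [(5, 3), (0, 9)]})
print(symbolic_policy(state, theta=np.array([1.0, 0.1])))  # 1 -> move right
```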
Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify the extent of this effect, we conduct a series of controlled experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Transferring these learnings onto the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.
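A tiny sketch of the character-aware vs. character-blind distinction: a subword tokenizer can hide a word's spelling behind opaque pieces, while a character-level view exposes every glyph (the BPE segmentation shown is hypothetical):

```python
# Character-blind: a subword tokenizer may map a rare word to opaque pieces,
# so the model never sees its letters. Character-aware: the word is fed as glyphs.
word = "chiaroscuro"

subword_pieces = ["chi", "aros", "curo"]   # hypothetical BPE segmentation
character_pieces = list(word)              # ['c', 'h', 'i', 'a', ...]

print(subword_pieces)
print(character_pieces)
```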
Multi-Task Learning (MTL) has shown its importance in user products for fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training one network for multiple tasks simultaneously. However, MTL does not provide a solution if each task needs training on a different dataset. To solve this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN helps in training the model with multiple datasets simultaneously, where each branch of the tree may need a different training dataset. We show in the results that TreeDNN provides competitive performance with the advantage of a reduced ROM requirement for parameter storage and increased responsiveness of the system by loading only the specific branch at inference time.
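A minimal sketch of the tree idea, assuming a PyTorch-style shared trunk with one branch per dataset and only the needed branch exported at inference; the layer sizes and task names are illustrative, not the paper's TreeDNN architecture:

```python
import torch
import torch.nn as nn

class TreeDNNSketch(nn.Module):
    """Shared trunk + one branch per task/dataset; only the needed branch is kept at inference."""
    def __init__(self, num_branches=3, feat=128, num_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(784, feat), nn.ReLU())
        self.branches = nn.ModuleDict(
            {f"task{i}": nn.Linear(feat, num_classes) for i in range(num_branches)})
    def forward(self, x, task):
        return self.branches[task](self.trunk(x))

model = TreeDNNSketch()
x = torch.randn(2, 784)
logits = model(x, task="task1")

# For deployment, one could export the trunk plus a single branch to cut ROM usage.
deployable = nn.Sequential(model.trunk, model.branches["task1"])
print(logits.shape, sum(p.numel() for p in deployable.parameters()))
```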
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
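For flavor, a toy sketch of turning one supervised example into an instruction-style prompt/target pair, with and without a chain-of-thought rationale; the field names and template are assumptions, not the Flan data format:

```python
# Illustrative formatting of one example into an instruction-style prompt/target pair.
example = {
    "question": "A bag has 3 red and 5 blue marbles. How many marbles in total?",
    "rationale": "3 red plus 5 blue gives 3 + 5 = 8.",
    "answer": "8",
}

def to_instruction(ex, use_cot):
    prompt = f"Answer the following question.\n\nQ: {ex['question']}\nA:"
    target = f" {ex['rationale']} The answer is {ex['answer']}." if use_cot else f" {ex['answer']}"
    return prompt, target

print(to_instruction(example, use_cot=True)[1])
```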
Aspect-based sentiment analysis (ABSA) involves identifying the sentiment polarity of a review sentence toward a given aspect. Deep sequential models such as RNNs, LSTMs, and GRUs are the current state-of-the-art approaches for inferring sentiment polarity. These methods capture the contextual relationships among the words of a review sentence well, but they fall short in capturing long-term dependencies. Attention mechanisms, which focus only on the most relevant parts of the sentence, play an important role here. In ABSA, the position of the aspect is crucial: words near the aspect contribute more to determining the sentiment toward it. We therefore propose a method that captures position-based information using the dependency parse tree and supplies it to the attention mechanism. Using this type of position information, rather than simple word-distance-based positions, enhances the performance of deep learning models. We perform experiments on the SemEval'14 dataset to demonstrate the effect of dependency-based position information on ABSA.
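A small sketch of the underlying idea, assuming a dependency parse is available as word-index edges: compute the tree distance from the aspect word with BFS and turn it into position weights for attention (the example sentence, parse, and 1/(1+d) weighting are illustrative choices, not the paper's exact scheme):

```python
from collections import deque

def tree_distances(edges, source, n):
    """BFS over an (undirected) dependency tree given as word-index edges."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [None] * n
    dist[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# "The battery life is great but the screen scratches easily", aspect = "battery" (index 1)
edges = [(0, 1), (1, 2), (2, 4), (3, 4), (4, 5), (5, 8), (6, 7), (7, 8), (8, 9)]
d = tree_distances(edges, source=1, n=10)
weights = [1.0 / (1 + di) for di in d]  # words closer to the aspect get larger attention weights
print(weights)
```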
We consider online imitation learning (OIL), where the task is to find a policy that imitates the behavior of an expert through active interaction with the environment. We aim to bridge the gap between policy optimization and OIL algorithms by analyzing DAgger, one of the most popular OIL algorithms. Specifically, if the policy class is expressive enough to contain the expert policy, we prove that DAgger achieves constant regret. Unlike previous results that require the losses to be bounded, our result only needs a weaker assumption on the losses, stated with respect to the policy's sufficient statistics (rather than its parameterization). To ensure convergence for a broader class of policies and losses, we augment DAgger with an additional regularization term. In particular, we propose a Follow-the-Regularized-Leader (FTRL) variant and its adaptive variant for OIL, and develop a memory-efficient implementation that matches the memory requirements of FTL. Assuming the loss functions are smooth and convex with respect to the policy parameters, we also prove that FTRL achieves constant regret for any sufficiently expressive policy class, while retaining $O(\sqrt{T})$ regret in the worst case. We demonstrate the effectiveness of these algorithms experimentally on synthetic and high-dimensional control tasks.
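For context, a minimal sketch of the DAgger-style aggregation loop analyzed here: roll out the learner, label the visited states with the expert, aggregate, and refit; the toy environment and the stand-in "fit" step are illustrative assumptions:

```python
import numpy as np

def dagger(env_reset, env_step, expert_action, fit_policy, horizon=20, iterations=5):
    """Minimal DAgger loop: roll out the current policy, query the expert on the
    visited states, aggregate all data, and refit the policy on the aggregate."""
    states, actions = [], []
    policy = expert_action  # iteration 0 may simply follow the expert
    for _ in range(iterations):
        s = env_reset()
        for _ in range(horizon):
            states.append(s)
            actions.append(expert_action(s))   # expert labels the state the learner visited
            s = env_step(s, policy(s))         # the learner's own action drives the rollout
        policy = fit_policy(np.array(states), np.array(actions))
    return policy

# Toy 1-D tracking problem: the state drifts and the expert pushes it back toward 0.
env_reset = lambda: 1.0
env_step = lambda s, a: s + 0.1 * a + 0.01
expert_action = lambda s: -np.sign(s)
fit_policy = lambda S, A: (lambda s: -np.sign(s))  # stand-in for a supervised learner
learned = dagger(env_reset, env_step, expert_action, fit_policy)
print(learned(0.5))  # -1.0
```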
There has been a lot of interest in the scaling properties of Transformer models. However, not much has been done in the way of systematically studying how scaling properties are affected by different inductive biases and model architectures. Do different model architectures scale differently? If so, how do inductive biases affect scaling behavior, and how does this influence upstream (pretraining) and downstream (transfer) performance? This paper conducts a systematic study of the scaling behavior of ten diverse model architectures, such as Transformers, Switch Transformers, Universal Transformers, dynamic convolutions, Performers, and the recently proposed MLP-Mixers. Through extensive experiments, we show that (1) architecture is indeed an important consideration when performing scaling, and (2) the best-performing model can fluctuate across different scales. We believe that the findings outlined in this work have important implications for how model architectures are currently evaluated in the community.
Convolutional neural network (CNN) approaches available in the current literature are designed to work primarily with low-resolution images. When applied to very large images, they face challenges related to GPU memory, receptive fields that are smaller than needed for semantic context, and the need to incorporate multi-scale features. The resolution of the input images can be reduced, but only at the cost of losing a great deal of critical information. Motivated by these issues, we introduce a new research problem of training CNN models for very large images and present 'UltraMNIST', a simple yet representative benchmark dataset for this task. UltraMNIST is built from the popular MNIST digits with additional complexity added to replicate well the challenges of real-world problems. We present two variants of the problem: 'UltraMNIST classification' and 'budget-aware UltraMNIST classification'. The standard UltraMNIST classification benchmark is intended to facilitate the development of novel CNN training methods that make efficient use of the best available GPU resources. The budget-aware variant is intended to promote methods that work under constrained GPU memory. For the development of competitive solutions, we provide several baseline models for the standard benchmark and its budget-aware variant. We study the effect of reducing resolution on the performance of baseline models involving pretrained backbones from popular state-of-the-art models and present the results. Finally, with the proposed benchmark dataset and baselines, we hope to pave the ground for a new generation of CNN methods suited to handling large images in an efficient and resource-light manner.
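As a hedged illustration of the kind of input the benchmark targets, the sketch below scatters a few MNIST-sized crops onto a very large canvas; the canvas size, digit count, and placement rule are arbitrary choices, not the actual UltraMNIST generation procedure:

```python
import numpy as np

def compose_large_canvas(digit_images, canvas_size=4096, n_digits=5, rng=None):
    """Illustrative only: scatter a few 28x28 digit crops onto a very large canvas,
    the kind of ultra-high-resolution input that breaks standard low-resolution CNN pipelines."""
    rng = rng or np.random.default_rng(0)
    canvas = np.zeros((canvas_size, canvas_size), dtype=np.float32)
    for _ in range(n_digits):
        digit = digit_images[rng.integers(len(digit_images))]
        y, x = rng.integers(0, canvas_size - 28, size=2)
        canvas[y:y + 28, x:x + 28] = np.maximum(canvas[y:y + 28, x:x + 28], digit)
    return canvas

fake_digits = np.random.rand(10, 28, 28).astype(np.float32)  # stand-ins for real MNIST crops
canvas = compose_large_canvas(fake_digits)
print(canvas.shape)  # (4096, 4096)
```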
Multi-object tracking (MOT) is a challenging task that involves detecting objects in a scene and tracking them across a sequence of frames. Evaluating this task is difficult due to temporal occlusions and variations across the image sequence. The primary evaluation metric for benchmarking MOT methods on datasets such as KITTI has become the higher order tracking accuracy (HOTA) metric, which is better able to describe performance than metrics such as MOTA, DetA, and IDF1. Point detection and tracking is a closely related task that can be regarded as a special case of object detection. However, there are differences in how the detection task itself is evaluated (point distances versus bounding-box overlap). When the temporal dimension and multi-view scenarios are included, the evaluation task becomes even more complex. In this work, we propose a multi-view higher order tracking metric (MVHOTA) to determine the accuracy of multi-point (multi-instance and multi-class) detection while taking temporal and spatial associations into account. MVHOTA can be interpreted as the geometric mean of detection, association, and correspondence accuracy, thereby giving equal weight to each factor. We demonstrate a use case on a publicly available endoscopic point-detection dataset from a previously organized medical challenge. Furthermore, we compare against other adjusted MOT metrics for this use case, discuss the properties of MVHOTA, and show how the proposed correspondence accuracy and occlusion index facilitate the analysis of occlusion-handling methods. The code will be made publicly available.
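A one-line sketch of the metric's stated structure, the geometric mean of detection, association, and correspondence accuracy (the numeric inputs below are made up):

```python
def mvhota(detection_acc, association_acc, correspondence_acc):
    """MVHOTA as described in the abstract: the geometric mean of detection,
    association, and correspondence accuracy, weighting each factor equally."""
    return (detection_acc * association_acc * correspondence_acc) ** (1.0 / 3.0)

print(round(mvhota(0.9, 0.8, 0.7), 4))  # 0.7958
```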